Unsupervised Neural-Symbolic Integration

Author

  • Son N. Tran
Abstract

Symbolic representation has long been considered a language of human intelligence, while neural networks offer robust computation and the ability to deal with noisy data. Neural-symbolic integration can offer better learning and reasoning while providing a means for interpretability through the representation of symbolic knowledge. Although previous work has focused intensively on supervised feedforward neural networks, little has been done for their unsupervised counterparts. In this paper we show how to integrate symbolic knowledge into unsupervised neural networks. We exemplify our approach with knowledge in different forms, including propositional logic for DNA promoter prediction and first-order logic for understanding family relationships.
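As a rough illustration of the idea of embedding propositional knowledge into an unsupervised model, the sketch below encodes a single rule into one hidden unit of an RBM-style network. This is a minimal sketch, not the paper's exact construction: the rule, the confidence value `c`, and the bias formula are illustrative assumptions in the spirit of confidence-weighted rule encodings.

```python
import numpy as np

# Illustrative assumption: encode the rule  h <-> (x1 AND NOT x2)
# as one hidden unit of an RBM. Positive literals get weight +c,
# negative literals -c, and the bias offsets the positive literals
# so the unit's pre-activation is positive only when the rule holds.
c = 5.0                           # confidence of the rule (assumed value)
W = np.array([[c], [-c]])         # weights for visible units [x1, x2]
b_h = np.array([-0.5 * c])        # bias: -c * (num_positive_literals - 0.5)

def hidden_activation(v):
    """Probability that the rule's hidden unit turns on for visible v."""
    return 1.0 / (1.0 + np.exp(-(v @ W + b_h)))

# The unit fires strongly only for the satisfying assignment x1=1, x2=0.
p_on = hidden_activation(np.array([1.0, 0.0]))[0]   # rule satisfied
p_off = hidden_activation(np.array([0.0, 1.0]))[0]  # rule violated
```

With `c = 5.0`, the satisfying assignment yields a pre-activation of `+2.5` (activation near 1) and every violating assignment yields a negative pre-activation (activation near 0), so sampling from the model favors states consistent with the encoded rule.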


Similar resources

Markovian Bias of Neural-based Architectures With Feedback Connections

Dynamic neural network architectures can deal naturally with sequential data through recursive processing enabled by feedback connections. We show how such architectures are predisposed for suffix-based Markovian input sequence representations in both supervised and unsupervised learning scenarios. In particular, in the context of such architectural predispositions, we study computational and l...

A Neural-Symbolic Approach to Natural Language Tasks

Deep learning (DL) has in recent years been widely used in natural language processing (NLP) applications due to its superior performance. However, while natural languages are rich in grammatical structure, DL has not been able to explicitly represent and enforce such structures. This paper proposes a new architecture to bridge this gap by exploiting tensor product representations (TPR), a stru...

Attentive Tensor Product Learning for Language Generation and Grammar Parsing

This paper proposes a new architecture — Attentive Tensor Product Learning (ATPL) — to represent grammatical structures in deep learning models. ATPL is a new architecture to bridge this gap by exploiting Tensor Product Representations (TPR), a structured neural-symbolic model developed in cognitive science, aiming to integrate deep learning with explicit language structures and rules. The key ...

Towards Artificial Minds

The ultimate goal of cognitive sciences is to understand how the mind works and the ultimate goal of neural modeling is to build the artificial mind. Short summary of the state of art in this field is given. The symbolic/neural points of view are complementary rather than exclusive. The Floating Gaussian Model (FGM) introduced in this paper facilitates both neural and symbolic interpretations: ...

Neural-symbolic integration

The field of neural-symbolic integration has received much attention recently. While with propositional paradigms, the integration of symbolic knowledge and connectionist systems (also called artificial neural networks) has already resulted in applicable systems, the theoretical foundations for the first-order case are currently being laid and first perspectives for real implementations are eme...


Journal:
  • CoRR

Volume: abs/1706.01991  Issue:

Pages: -

Publication date: 2017